Probabilistic Submodular Maximization in Sub-Linear Time
Authors
Abstract
In this paper, we consider optimizing submodular functions that are drawn from some unknown distribution. This setting arises, e.g., in recommender systems, where the utility of a subset of items may depend on a user-specific submodular utility function. In modern applications, the ground set of items is often so large that even the widely used (lazy) greedy algorithm is not efficient enough. As a remedy, we introduce the problem of sublinear time probabilistic submodular maximization: Given training examples of functions (e.g., via user feature vectors), we seek to reduce the ground set so that optimizing new functions drawn from the same distribution will provide almost as much value when restricted to the reduced ground set as when using the full set. We cast this problem as a two-stage submodular maximization and develop a novel, efficient algorithm for this problem that offers a 1/2 (1 − 1/e^2) approximation ratio for general monotone submodular functions and general matroid constraints. We demonstrate the effectiveness of our approach on several real-world problem instances, where running the maximization problem on the reduced ground set leads to a two-fold speed-up while incurring almost no loss.
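The abstract describes reducing the ground set offline so that, at query time, a standard greedy routine can be run on the smaller set. The sketch below is a minimal illustration under that reading, not the paper's two-stage algorithm: it runs plain greedy maximization of a monotone submodular (coverage) objective under a cardinality constraint on a full versus a reduced ground set. All identifiers here (greedy_maximize, coverage_utility, item_covers, reduced_ground_set) are hypothetical.

```python
# Minimal sketch, assuming a toy coverage objective; not the paper's algorithm.

def greedy_maximize(utility, ground_set, k):
    """Pick up to k elements, each time adding the element with the largest marginal gain."""
    selected = set()
    for _ in range(k):
        best_elem, best_gain = None, 0.0
        for e in ground_set - selected:
            gain = utility(selected | {e}) - utility(selected)
            if gain > best_gain:
                best_elem, best_gain = e, gain
        if best_elem is None:  # no remaining element improves the value
            break
        selected.add(best_elem)
    return selected

# Toy monotone submodular objective: set coverage.
item_covers = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}, 4: {"a", "d", "e"}}

def coverage_utility(subset):
    covered = set()
    for i in subset:
        covered |= item_covers[i]
    return len(covered)

full_ground_set = set(item_covers)
# In the two-stage setting the reduced set would be learned from training
# functions drawn from the same distribution; here it is fixed by hand.
reduced_ground_set = {2, 4}

print(greedy_maximize(coverage_utility, full_ground_set, 2))     # e.g. {2, 4}, coverage 5
print(greedy_maximize(coverage_utility, reduced_ground_set, 2))  # {2, 4}, coverage 5
```

On this toy instance the reduced set happens to preserve the full-set value; the paper's contribution is choosing the reduced set so that this holds approximately, in expectation, for new functions drawn from the distribution, while the per-query greedy run touches far fewer elements.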
Related papers
Submodular Maximization and Diversity in Structured Output Spaces
We study the greedy maximization of a submodular set function F : 2^V → R when each item in the ground set V is itself a combinatorial object, e.g. a configuration or labeling of a base set of variables z = {z_1, ..., z_m}. This problem arises naturally in a number of domains, such as Computer Vision or Natural Language Processing, where we want to search for a set of diverse high-quality solutions...
Adapting Kernel Representations Online Using Submodular Maximization
Kernel representations provide a nonlinear representation, through similarities to prototypes, but require only simple linear learning algorithms given those prototypes. In a continual learning setting, with a constant stream of observations, it is critical to have an efficient mechanism for sub-selecting prototypes amongst observations. In this work, we develop an approximately submodular crit...
Guaranteed Non-convex Optimization: Submodular Maximization over Continuous Domains
Submodular continuous functions are a category of (generally) non-convex/non-concave functions with a wide spectrum of applications. We characterize these functions and demonstrate that they can be maximized efficiently with approximation guarantees. Specifically, I) for monotone submodular continuous functions with an additional diminishing returns property, we propose a Frank-Wolfe style algor...
Monotone k-Submodular Function Maximization with Size Constraints
A k-submodular function is a generalization of a submodular function, where the input consists of k disjoint subsets, instead of a single subset, of the domain. Many machine learning problems, including influence maximization with k kinds of topics and sensor placement with k kinds of sensors, can be naturally modeled as the problem of maximizing monotone k-submodular functions. In this paper, ...
Submodular meets Structured: Finding Diverse Subsets in Exponentially-Large Structured Item Sets
To cope with the high level of ambiguity faced in domains such as Computer Vision or Natural Language Processing, robust prediction methods often search for a diverse set of high-quality candidate solutions or proposals. In structured prediction problems, this becomes a daunting task, as the solution space (image labelings, sentence parses, etc.) is exponentially large. We study greedy algorith...